CIUs in conversation

This document summarizes analyses on CIUs in conversation.

Load packages

Read in data:

across <- read_csv(here("Data", "conversation.csv")) %>%
  janitor::clean_names() %>%
  pivot_wider(names_from = variable, values_from = pwa_score) %>%
  rename(words = Words, cius = CIUs) %>%
  select(participant, group, cop, spica_percent_ile, time_point, words, cius, ProporCIUs) %>%
  ungroup()
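As a toy illustration of the `pivot_wider()` step above (not the study data): long-format rows keyed by a `variable` column become one wide row per participant and time point, with `pwa_score` spread into the `Words`, `CIUs`, and `ProporCIUs` columns.

```r
library(tidyr)

# Toy long-format data mimicking the structure of conversation.csv
long <- data.frame(
  participant = c("P1", "P1", "P1"),
  time_point  = c("Entry", "Entry", "Entry"),
  variable    = c("Words", "CIUs", "ProporCIUs"),
  pwa_score   = c(200, 110, 0.55)
)

# One wide row per participant/time point; `variable` values become columns
wide <- pivot_wider(long, names_from = variable, values_from = pwa_score)
```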

Model preparation

This prep is the same as in the CIUs in structured discourse analysis.

m.across <- across %>%
  mutate(time_point = as.factor(ifelse(time_point == 'Entry', 0, 
                             ifelse(time_point == 'Exit', 1, 2)))) %>%  # Entry = 0, Exit = 1, Follow Up = 2
  mutate(obs = row_number())

m.across$severity.z = scale(m.across$spica_percent_ile)[,1] # z-score severity
contrasts(m.across$time_point) <- contr.treatment(3) # set categorical variable contrasts
colnames(contrasts(m.across$time_point)) <- c('pre_vs_post', 'pre_vs_fu') # name contrasts
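For reference, the treatment coding set above looks like this: the first level (Entry) is the reference, and each column compares a later level against it.

```r
# Treatment contrasts for a 3-level factor; rows are the factor levels
tc <- contr.treatment(3)
colnames(tc) <- c("pre_vs_post", "pre_vs_fu")
tc
#   pre_vs_post pre_vs_fu
# 1           0         0
# 2           1         0
# 3           0         1
```

So `pre_vs_post` estimates Exit minus Entry and `pre_vs_fu` estimates Follow Up minus Entry, which matches the labels used in the results table.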

Visualize change

propor <- m.across %>%
  mutate(proportion = cius/words,
         var = 'prop',
         time_point = as.numeric(as.character(time_point))) %>%  # factor -> 0, 1, 2
  select(participant, time_point, score = proportion, var) 

p1 = propor

p1 %>%
  drop_na() %>%
  ggplot(aes(x = time_point, y = score, group = participant)) +
  geom_line(alpha = .5) +
  #facet_wrap(~var, scales = 'free') +
  scale_x_continuous(labels = c('Entry', 'Exit', 'Follow Up'),
                     breaks= c(0, 1, 2),
                     limits = c(-.2,2.2)) +
  #scale_y_continuous(limits = c(0,1), breaks = seq(0,1, .2)) +
  theme_grey(base_size = 14) +
  xlab(NULL)  +
  ylab(NULL) +
  scale_y_continuous(labels = scales::percent, limits = c(0,1)) +
  ggtitle('Proportion CIUs')

Mean proportion of CIUs at each time point:

p1 %>%
  drop_na() %>%
  group_by(time_point) %>%
  summarize(mean_percent_cius = mean(score)) %>%
  kable()
time_point   mean_percent_cius
0 (Entry)            0.5161065
1 (Exit)             0.5386939
2 (Follow Up)        0.5573241
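Note these are unweighted means of each participant's proportion. A pooled alternative (not what was computed above) would divide total CIUs by total words at each time point, weighting more talkative participants more heavily:

```r
# Alternative, word-weighted summary for comparison with the table above
m.across %>%
  drop_na(cius, words) %>%
  group_by(time_point) %>%
  summarize(pooled_percent_cius = sum(cius) / sum(words))
```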

Model

I ran additional models with random effects for group and for conversation partner without much success with model convergence. In these cases, I also added weakly informative priors without success. Given the relatively small number of observations, these additional random effects appear to add too much complexity for the model. The current model structure below is equivalent to the converging model in the CIUs in structured discourse, with default priors.

As a side note, there was also no evidence for any pre-treatment to treatment entry change in the delayed treatment group.

across.binom <- brm(cius | trials(words) ~ time_point*severity.z + (time_point|participant),
                    data = m.across,
                    family = binomial(link = 'logit'),
                    iter = 4000,
                    #control = list(adapt_delta = .9),
                    warmup = 1000,
                    chains = 4,
                    cores = 4,
                    backend = "cmdstanr",
                    inits = 'random',
                    save_all_pars = TRUE,
                    silent = TRUE, 
                    refresh = 0)
Running MCMC with 4 parallel chains...

Chain 1 finished in 11.1 seconds.
Chain 4 finished in 11.3 seconds.
Chain 3 finished in 11.7 seconds.
Chain 2 finished in 11.9 seconds.

All 4 chains finished successfully.
Mean chain execution time: 11.5 seconds.
Total execution time: 12.4 seconds.

This model was checked for overdispersion (none found) as well as R-hat, effective sample size, and chain mixing. All convergence indicators suggest that the model converged. The posterior predictive check is below:

brms::pp_check(across.binom, nsamples = 200)
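The convergence checks described above can be reproduced with something like the following (this assumes the fitted `across.binom` object; the particular helpers shown are my choice, not necessarily the ones originally run):

```r
# Per-parameter R-hat and effective sample sizes are in the summary
summary(across.binom)

# Worst-case diagnostics across all parameters
max(brms::rhat(across.binom), na.rm = TRUE)        # should be near 1.00
min(brms::neff_ratio(across.binom), na.rm = TRUE)  # effective sample size ratio

# Trace plots to eyeball chain mixing
plot(across.binom)
```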

Model Results

Parameter                                    Estimate  Std. Error  90% CI
Population-level effects
  Intercept                                     -0.03        0.24  [-0.43, 0.36]
  Time point: entry to exit                      0.01        0.17  [-0.27, 0.28]
  Time point: entry to follow up                 0.16        0.17  [-0.11, 0.43]
  Severity                                       0.71        0.24  [ 0.32, 1.12]
  Time point: entry to exit x severity           0.24        0.18  [-0.06, 0.54]
  Time point: entry to follow up x severity      0.22        0.18  [-0.07, 0.53]
Group-level effects (participant)
  sd: intercept                                  1.10        0.24  [ 0.78, 1.54]
  sd: time point: entry to exit                  0.66        0.15  [ 0.46, 0.93]
  sd: time point: entry to follow up             0.65        0.15  [ 0.45, 0.92]

The results do not provide evidence for any group-level effects from entry to exit (in fact, they probably suggest the lack thereof). There is weak evidence of change at follow up, but note the error estimate is essentially the same as the coefficient. The effect size here is also smaller than for CIUs in structured discourse (if I recall, ~.25 at exit and ~.35 at follow up).

They do, however, suggest that severity moderates the effect of treatment, such that people with milder aphasia are more likely to demonstrate improvements in conversational CIUs, which is notable in the individual effect plot below.

Individual Effect Sizes

Next, I adapted the code from the CIUs in structured discourse analysis to calculate individual effect sizes for this model. Data are imputed where missing based on model predictions. We should discuss whether this is appropriate for the participant with no exit or follow-up data. I have hidden the code in this document given its length.
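Since that code is hidden, here is only a minimal sketch of one common approach to participant-level change from a binomial brms model (the actual hidden code may differ; the object names here are mine):

```r
# Predicted CIU counts per 100 words for each participant at entry and exit,
# holding severity at the sample mean (severity.z = 0)
newd <- expand.grid(
  participant = unique(m.across$participant),
  time_point  = factor(c(0, 1), levels = c(0, 1, 2)),
  words       = 100
)
newd$severity.z <- 0

# Rows of `newd` line up with columns of the draws matrix; differencing
# exit - entry within participant gives a posterior distribution of change
pred <- brms::posterior_epred(across.binom, newdata = newd)
```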

Compare to structured CIUs

Next, I compared the conversational CIU outcome measures to the structured CIU outcome measures. Note you can hover over each data point to see which participant it refers to.

Exit:

plot_ly(data = exit, x = ~y_structured, y = ~y_conversation,
        type = "scatter", mode = "markers", color = ~spica,
        hoverinfo = 'text',
        text = ~participant) 

Followup:

plot_ly(data = followup, x = ~y_structured, y = ~y_conversation,
        type = "scatter", mode = "markers", color = ~spica,
        hoverinfo = 'text',
        text = ~participant) 

As you can see, there’s not much of a relationship here, which is a bit of a bummer. A good next step, I thought, was to look at the correlation between structured CIUs and conversational CIUs at each time point (copying code from the last analysis).

all.raw %>%
  ggplot(aes(x = structured, y = conversation, fill = timepoint)) +
  geom_point(size = 3, pch = 21, color = "white") + 
  stat_smooth(inherit.aes = F, aes(x = structured, y = conversation), method = "lm", color = "black") + 
  stat_cor(inherit.aes = F, aes(x = structured, y = conversation)) +
  xlim(0,1) + ylim(0,1)
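The plot above pools time points into a single fit; the per-time-point correlations can also be computed directly (assuming `all.raw` has the same `structured`, `conversation`, and `timepoint` columns used in the plot):

```r
# Pearson correlation between the two measures within each time point
all.raw %>%
  group_by(timepoint) %>%
  summarize(r = cor(structured, conversation, use = "pairwise.complete.obs"),
            n = sum(complete.cases(structured, conversation)))
```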

That’s a much stronger relationship… it seems more likely that the treatment effects are unrelated, rather than the measures themselves? I wanted to check agreement, not just correlation…

irr::icc(all.raw[3:4], model = "twoway", type = "agreement")
 Single Score Intraclass Correlation

   Model: twoway 
   Type : agreement 

   Subjects = 65 
     Raters = 2 
   ICC(A,1) = 0.672

 F-Test, H0: r0 = 0 ; H1: r0 > 0 
 F(64,6.06) = 8.37 , p = 0.00604 

 95%-Confidence Interval for ICC Population Values:
  0.164 < ICC < 0.852

Agreement is in the ‘moderate at best’ range. In case you’re of the Bland-Altman persuasion:

all.raw$Avg <- (all.raw$structured + all.raw$conversation) / 2
all.raw$Dif <- all.raw$structured - all.raw$conversation

ggplot(all.raw, aes(x = Avg, y = Dif, fill = timepoint)) +
  geom_point(size = 3, pch = 21, color = "white") +
  geom_hline(yintercept = mean(all.raw$Dif, na.rm = TRUE), colour = "black", size = 0.5) +
  geom_hline(yintercept = mean(all.raw$Dif, na.rm = TRUE) - (1.96 * sd(all.raw$Dif, na.rm = TRUE)), colour = "purple", size = 0.5) +
  geom_hline(yintercept = mean(all.raw$Dif, na.rm = TRUE) + (1.96 * sd(all.raw$Dif, na.rm = TRUE)), colour = "purple", size = 0.5) +
  ylab("Diff. Between Measures") +
  xlab("Average Measure")
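The bias and 95% limits of agreement drawn as horizontal lines above can also be reported numerically:

```r
# Mean difference (bias) and Bland-Altman limits of agreement
bias <- mean(all.raw$Dif, na.rm = TRUE)
loa  <- bias + c(-1.96, 1.96) * sd(all.raw$Dif, na.rm = TRUE)
round(c(bias = bias, lower = loa[1], upper = loa[2]), 3)
```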